    Determining WWW User's Next Access and Its Application to Pre-fetching

    World-Wide Web (WWW) services have grown to levels where significant delays are expected to happen. Techniques like pre-fetching are likely to help users personalize their needs, reducing their waiting times. However, pre-fetching is only effective if the right documents are identified and if the user's next move is correctly predicted; otherwise, pre-fetching will only waste bandwidth. It is therefore productive to determine whether a revisit will occur before starting to pre-fetch. In this paper we develop two user models that help determine the user's next move. One model uses a Random Walk approximation and the other is based on Digital Signal Processing techniques. We also give hints on how to use such models with a simple pre-fetching technique that we are developing.
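
    As a rough illustration of the first kind of model (a minimal Python sketch, not the paper's actual formulation; the class and method names here are hypothetical), a random-walk-style predictor can be built from a first-order Markov chain over observed page transitions, pre-fetching only when the predicted next move is confident enough:

        from collections import defaultdict

        class NextAccessPredictor:
            """First-order Markov (random-walk style) next-page model.

            A sketch only: the paper's Random Walk and DSP-based models
            are more elaborate than a raw transition-count table.
            """

            def __init__(self):
                # transition counts: page -> {next_page: count}
                self.counts = defaultdict(lambda: defaultdict(int))

            def observe(self, page, next_page):
                """Record one transition from the user's access log."""
                self.counts[page][next_page] += 1

            def predict(self, page, threshold=0.5):
                """Most likely next page, or None when confidence is low
                (in which case pre-fetching would only waste bandwidth)."""
                followers = self.counts[page]
                total = sum(followers.values())
                if total == 0:
                    return None
                best = max(followers, key=followers.get)
                return best if followers[best] / total >= threshold else None

    For example, once observe("/index.html", "/news.html") has been recorded often enough, predict("/index.html") returns "/news.html" and the server could pre-fetch that document during the user's think time.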

    Primordial non-Gaussianity from the covariance of galaxy cluster counts

    It has recently been proposed that the large-scale bias of dark matter halos depends sensitively on primordial non-Gaussianity of the local form. In this paper we point out that the strong scale dependence of the non-Gaussian halo bias imprints a distinct signature on the covariance of cluster counts. We find that using the full covariance of cluster counts improves constraints on the non-Gaussian parameter f_(NL) by 3 (1) orders of magnitude relative to constraints from cluster counts (counts+clustering variance) alone. We forecast f_(NL) constraints for the upcoming Dark Energy Survey in the presence of uncertainties in the mass-observable relation, halo bias, and photometric redshifts. We find that the Dark Energy Survey can yield constraints on non-Gaussianity of σ(f_(NL))~1–5 even for relatively conservative assumptions regarding systematics. An excess of correlations of cluster counts on scales of hundreds of megaparsecs would be a smoking-gun signature of primordial non-Gaussianity of the local type.
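
    For context, the scale dependence this abstract relies on is the standard local-type halo bias correction (Dalal et al. 2008); up to convention-dependent factors it reads, in LaTeX:

        \Delta b(k, f_{\rm NL}) \simeq
            \frac{3 f_{\rm NL} (b_1 - 1)\, \delta_c\, \Omega_m H_0^2}
                 {c^2 k^2 T(k) D(z)}

    where b_1 is the Gaussian bias, \delta_c \approx 1.686 is the critical collapse threshold, T(k) the transfer function and D(z) the growth factor. The 1/k^2 growth toward large scales is what imprints the excess covariance of cluster counts on hundreds-of-megaparsec scales.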

    Fast Distributed Computation of Distances in Networks

    This paper presents a distributed algorithm that simultaneously computes the diameter, radius and node eccentricity at all nodes of a synchronous network. Such topological information may be useful as input for configuring other algorithms. Previous approaches have been modular, progressing in sequential phases using building blocks such as BFS tree construction, and thus incur longer executions than strictly required. We present an algorithm that, by timely propagation of available estimations, converges faster to the correct values. We give local criteria for detecting convergence at each node. The algorithm avoids the creation of BFS trees and simply manipulates sets of node ids and hop counts. For the worst-case scenario of variable start times, each node i with eccentricity ecc(i) can compute: its eccentricity in diam(G)+ecc(i)+2 rounds; the diameter in 2*diam(G)+ecc(i)+2 rounds; and the radius in diam(G)+ecc(i)+2*radius(G) rounds. Comment: 12 pages.
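
    A minimal synchronous-round sketch of the set-propagation idea (in Python; this is not the authors' algorithm, which also handles variable start times and per-node convergence detection): each node repeatedly merges its neighbours' (node id, hop count) sets, and eccentricity, diameter and radius follow once the sets stop changing.

        def distances_by_flooding(adj):
            """adj: dict node -> neighbours (undirected, connected graph).
            Returns per-node eccentricities, the diameter and the radius."""
            # each node starts knowing only itself, at hop count 0
            known = {v: {v: 0} for v in adj}
            changed = True
            while changed:                      # one iteration = one round
                changed = False
                snapshot = {v: dict(k) for v, k in known.items()}
                for v in adj:
                    for u in adj[v]:            # receive neighbours' sets
                        for w, d in snapshot[u].items():
                            if d + 1 < known[v].get(w, float("inf")):
                                known[v][w] = d + 1
                                changed = True
            ecc = {v: max(known[v].values()) for v in adj}
            return ecc, max(ecc.values()), min(ecc.values())

        # path graph 0-1-2: eccentricities {0: 2, 1: 1, 2: 2},
        # diameter 2, radius 1
        ecc, diam, rad = distances_by_flooding({0: [1], 1: [0, 2], 2: [1]})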

    Constraining Dark Energy with Clusters: Complementarity with Other Probes

    The Figure of Merit Science Working Group (FoMSWG) recently forecast the constraints on dark energy that will be achieved prior to the Joint Dark Energy Mission (JDEM) by ground-based experiments that exploit baryon acoustic oscillations, type Ia supernovae, and weak gravitational lensing. We show that cluster counts from on-going and near-future surveys should provide robust, complementary dark energy constraints. In particular, we find that optimally combined optical and Sunyaev-Zel'dovich effect cluster surveys should improve the Dark Energy Task Force (DETF) figure of merit for pre-JDEM projects by a factor of two, even without prior knowledge of the nuisance parameters in the cluster mass-observable relation. Comparable improvements are achieved in the forecast precision of the parameters specifying the principal component description of the dark energy equation of state, as well as in the growth index gamma. These results indicate that cluster counts can play an important complementary role in constraining dark energy and modified gravity even if the associated systematic errors are not strongly controlled. Comment: 6 pages, 3 figures, accepted to Phys. Rev. D. Discussion section added.
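
    For readers unfamiliar with the DETF figure of merit, the combination argument can be sketched numerically (with toy, made-up Fisher matrices, not the paper's forecasts): independent probes add at the Fisher-matrix level, and the figure of merit is the inverse area of the (w0, wa) error ellipse.

        import numpy as np

        def detf_fom(fisher):
            """DETF figure of merit ~ 1 / sqrt(det Cov(w0, wa)), with
            `fisher` already marginalised down to the (w0, wa) block."""
            cov = np.linalg.inv(fisher)
            return 1.0 / np.sqrt(np.linalg.det(cov))

        # toy Fisher matrices over (w0, wa) for two independent probes
        clusters = np.array([[40.0, 12.0], [12.0, 8.0]])
        other_probes = np.array([[60.0, 20.0], [20.0, 10.0]])

        # independent experiments combine by adding Fisher matrices
        print(detf_fom(other_probes))             # baseline FoM
        print(detf_fom(clusters + other_probes))  # combined FoM is larger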

    Looking for complication: The case of management education

    This paper argues that, in the face of the changes occurring in the organizational world, management education should rethink some of its premises and adapt to the new times. The need to complicate management learning, given the increased complication of competitive landscapes, is analyzed. Four possibilities for addressing organizational topics in a complicated way are contrasted: the vertical, horizontal, hypertextual, and dialectical approaches. The promise of the dialectical approach is particularly stressed as a more demanding and potentially enriching path for creating knowledge about organizations. A test of the four approaches with a group of undergraduate students provides preliminary data for analyzing the strengths and weaknesses of our proposal.

    Sensitivity of galaxy cluster dark energy constraints to halo modeling uncertainties

    We perform a sensitivity study of dark energy constraints from galaxy cluster surveys to uncertainties in the halo mass function, halo bias, and the mass-observable relation. For a set of idealized surveys, we evaluate cosmological constraints as priors on sixteen nuisance parameters in the halo modeling are varied. We find that surveys with a higher mass limit are more sensitive to mass-observable uncertainties, while surveys with low mass limits, which probe more of the mass function's shape and evolution, are more sensitive to mass function errors. We examine the correlations among nuisance and cosmological parameters. Mass function parameters are strongly positively (negatively) correlated with Omega_DE (w). Among the mass-observable parameters, Omega_DE is most sensitive to the normalization and its redshift evolution, while w is more sensitive to redshift evolution in the variance. While survey performance is limited mainly by mass-observable uncertainties, the current level of mass function error is responsible for up to a factor of two degradation in ideal cosmological constraints. For surveys that probe to low masses (10^13.5 h^-1 M_sun), even percent-level constraints on model nuisance parameters result in a degradation of ~sqrt{2} (2) on Omega_DE (w) relative to perfect knowledge. Comment: 13 pages, 5 figures, accepted by PR
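
    The role of the nuisance-parameter priors studied above can be sketched in the same Fisher formalism (toy two-parameter numbers, not the paper's sixteen-parameter halo model): a Gaussian prior of width sigma on a nuisance parameter adds 1/sigma^2 to its diagonal Fisher entry, and the marginalised cosmological error is read off the inverse.

        import numpy as np

        # toy Fisher matrix over (w, nuisance); made-up numbers
        F = np.array([[50.0, 30.0],
                      [30.0, 25.0]])

        def sigma_w(F, prior_sigma):
            """Marginalised error on w after imposing a Gaussian prior
            of width `prior_sigma` on the nuisance parameter."""
            Fp = F.copy()
            Fp[1, 1] += 1.0 / prior_sigma**2   # prior adds 1/sigma^2
            return np.sqrt(np.linalg.inv(Fp)[0, 0])

        for s in (1e3, 0.1, 0.01):   # weak prior -> percent-level prior
            print(s, sigma_w(F, s))
        # a weak prior reproduces the fully marginalised error; a tight
        # prior approaches the unmarginalised limit sqrt(1 / F[0, 0])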

    Shadows and strong gravitational lensing: a brief review

    For ultra compact objects (UCOs), Light Rings (LRs) and Fundamental Photon Orbits (FPOs) play a pivotal role in the theoretical analysis of strong gravitational lensing effects, and of black hole (BH) shadows in particular. In this short review, specific models are considered to illustrate how FPOs can be useful for understanding some non-trivial gravitational lensing effects. The paper briefly overviews the theoretical foundations of these effects, touching also on some of the related phenomenology, both in General Relativity (GR) and in alternative theories of gravity, hopefully providing some intuition and new insights into the underlying physics, which might be critical when testing the Kerr black hole hypothesis. Comment: 32 pages, 9 figures; review paper in the General Relativity and Gravitation (GRG) Topical Collection "Testing the Kerr spacetime with gravitational-wave and electromagnetic observations" (Guest Editor: Emanuele Berti); v2: typo corrected and two references added.
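
    As a concrete anchor for the LR discussion (standard Schwarzschild results from GR, not specific to this review): the light ring radius and the shadow's critical impact parameter are, in LaTeX,

        r_{\rm LR} = \frac{3GM}{c^2}, \qquad
        b_{\rm shadow} = 3\sqrt{3}\,\frac{GM}{c^2},

    so a distant observer at distance D sees an angular shadow diameter \theta \simeq 6\sqrt{3}\, GM / (c^2 D). Departures of the FPO structure from this Kerr/Schwarzschild baseline are what make shadow observations a test of the Kerr hypothesis.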

    Complex networks vulnerability to module-based attacks

    In the multidisciplinary field of Network Science, the optimization of procedures for efficiently breaking complex networks is attracting much attention from practical points of view. In this contribution we present a module-based method to efficiently break complex networks. The procedure first identifies the communities into which the network can be partitioned, then deletes the nodes (edges) that connect different modules, in order of their betweenness centrality ranking. We illustrate the method by applying it to various well-known examples of social, infrastructure, and biological networks. We show that the proposed method always outperforms vertex (edge) attacks based on the ranking of node (edge) degree or centrality, with a huge gain in efficiency for some examples. Remarkably, for the US power grid, the present method breaks the original network of 4941 nodes into many fragments smaller than 197 nodes (4% of the original size) by removing a mere 164 nodes (~3%) identified by the procedure. By comparison, any degree- or centrality-based procedure deleting the same number of nodes disconnects only 22% of the original network, i.e. more than 3800 nodes remain connected afterwards. Comment: 8 pages, 8 figures.
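
    A minimal sketch of the described procedure (in Python with networkx; greedy modularity maximisation stands in for whichever community detection method the authors used):

        import networkx as nx
        from networkx.algorithms import community

        def module_based_attack(G):
            """Nodes connecting different modules, ordered by betweenness
            centrality -- the removal order described in the abstract."""
            modules = community.greedy_modularity_communities(G)
            label = {v: i for i, part in enumerate(modules) for v in part}
            # keep only nodes with a neighbour in another module
            connectors = {v for u, w in G.edges()
                          if label[u] != label[w] for v in (u, w)}
            bc = nx.betweenness_centrality(G)
            return sorted(connectors, key=bc.get, reverse=True)

        G = nx.karate_club_graph()
        for v in module_based_attack(G)[:5]:
            G.remove_node(v)
        # size of the largest surviving fragment after the attack
        print(max(len(c) for c in nx.connected_components(G)))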